A Hidden Markov Restless Multi-armed Bandit Model for Playout Recommendation Systems
Authors
Abstract
We consider a restless multi-armed bandit (RMAB) with two types of arms, say A and B. Each arm can be in one of two states, 0 or 1. Playing a type A arm brings it to state 0 with probability one, while not playing it induces state transitions with arm-dependent probabilities. Conversely, playing a type B arm brings it to state 1 with probability one, and not playing it likewise induces transitions governed by the arm's transition probabilities. Further, playing an arm generates a unit reward with a probability that depends on the state of the arm. The belief about the state of an arm can be computed by a Bayesian update after every play. This RMAB is designed for recommendation systems in which the user's preferences depend on the history of recommendations; it can also be used in applications such as creating playlists or placing advertisements. In this paper we formulate the long-term reward maximization problem both as an infinite-horizon discounted reward problem and as an average reward problem. We analyse the RMAB by first studying the discounted reward scenario. We show that it is Whittle-indexable and obtain a closed-form expression for the Whittle index of each arm, computed from the belief about its state and the parameters that describe the arm. We then analyse the average reward problem using the vanishing discount approach and derive a closed-form expression for the Whittle index. For an RMAB to be useful in practice, we need to be able to learn the parameters of the arms. We present an algorithm derived from the Thompson sampling scheme that learns the parameters of the arms, and we illustrate its performance numerically.
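The belief dynamics described in the abstract can be sketched in a few lines. The following is an illustrative sketch, not the paper's implementation; the names `omega` (the probability the arm is in state 1), `r0`/`r1` (reward probabilities in states 0 and 1) and `p01`/`p11` (passive transition probabilities into state 1) are assumed notation.

```python
def belief_after_play(arm_type: str) -> float:
    """Playing resets the state deterministically: a type A arm
    goes to state 0, a type B arm goes to state 1, so the belief
    that the arm is in state 1 becomes 0 or 1 exactly."""
    return 0.0 if arm_type == "A" else 1.0

def belief_not_played(omega: float, p01: float, p11: float) -> float:
    """One step of the passive Markov chain: the updated
    probability that the arm is in state 1."""
    return omega * p11 + (1.0 - omega) * p01

def expected_reward(omega: float, r0: float, r1: float) -> float:
    """Probability of earning the unit reward if played now,
    averaged over the current belief."""
    return (1.0 - omega) * r0 + omega * r1
```

Between plays the belief drifts under the passive chain toward its stationary distribution; the Whittle index derived in the paper is a closed-form function of this belief and the arm parameters.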
Similar Resources
On the Whittle Index for Restless Multi-armed Hidden Markov Bandits
We consider a restless multi-armed bandit in which each arm can be in one of two states. When an arm is sampled, the state of the arm is not available to the sampler. Instead, a binary signal with a known randomness that depends on the state of the arm is available. No signal is available if the arm is not sampled. An arm-dependent reward is accrued from each sampling. In each time step, each a...
Finite dimensional algorithms for the hidden Markov model multi-armed bandit problem
The multi-arm bandit problem is widely used in scheduling of traffic in broadband networks, manufacturing systems and robotics. This paper presents a finite dimensional optimal solution to the multi-arm bandit problem for Hidden Markov Models. The key to solving any multi-arm bandit problem is to compute the Gittins index. In this paper a finite dimensional algorithm is presented which exactly ...
On Index Policies for Restless Bandit Problems
In this paper, we consider the restless bandit problem, which is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit problem in decision theory. In its ultimate generality, the restless bandit problem is known to be PSPACE-Hard to approximate to any non-trivial factor, and little progress has been made on this problem despite its significance in modeling...
Hidden Markov model multiarm bandits: a methodology for beam scheduling in multitarget tracking
In this paper, we derive optimal and suboptimal beam scheduling algorithms for electronically scanned array tracking systems. We formulate the scheduling problem as a multiarm bandit problem involving hidden Markov models (HMMs). A finite-dimensional optimal solution to this multiarm bandit problem is presented. The key to solving any multiarm bandit problem is to compute the Gittins index. We ...
On Optimality of Greedy Policy for a Class of Standard Reward Function of Restless Multi-armed Bandit Problem
In this paper, we consider the restless bandit problem, which is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit problem in decision theory. However, it is known to be PSPACE-Hard to approximate to any non-trivial factor. Thus the optimality is very difficult to obtain due to its high complexity. A natural method is to obtain the greedy policy considerin...
Journal title:
Volume Issue
Pages -
Publication date: 2017